2.
Med Teach; 43(5): 575-582, 2021 May.
Article in English | MEDLINE | ID: mdl-33590781

ABSTRACT

BACKGROUND: Using revised Bloom's taxonomy, some medical educators assume they can write multiple choice questions (MCQs) that specifically assess higher-order (analyze, apply) versus lower-order (recall) learning. The purpose of this study was to determine whether three key stakeholder groups (students, faculty, and education assessment experts) assign MCQs the same higher- or lower-order level.

METHODS: In Phase 1, the stakeholder groups assigned 90 MCQs to Bloom's levels. In Phase 2, faculty wrote 25 MCQs specifically intended as higher- or lower-order. Then, 10 students assigned these questions to Bloom's levels.

RESULTS: In Phase 1, there was low interrater reliability within the student group (Krippendorff's alpha = 0.37), within the faculty group (alpha = 0.37), and among the three groups (alpha = 0.34) when assigning questions as higher- or lower-order. The assessment team alone had high interrater reliability (alpha = 0.90). In Phase 2, 63% of students agreed with the faculty as to whether the MCQs were higher- or lower-order. There was low agreement between paired faculty and student ratings (Cohen's kappa range 0.098-0.448, mean 0.256).

DISCUSSION: For many questions, faculty and students did not agree on whether the questions were lower- or higher-order. While faculty may try to target specific levels of knowledge or clinical reasoning, students may approach the questions differently than intended.


Subject(s)
Educational Measurement; Writing; Faculty; Humans; Reproducibility of Results; Students
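
As an aside on the agreement statistics reported in this abstract: the short sketch below shows, in plain NumPy, how chance-corrected agreement (Cohen's kappa) could be computed for a single faculty-student pair labeling the same MCQs as higher- or lower-order. The ratings are invented and the code is illustrative only; it is not the authors' analysis, which also used Krippendorff's alpha to capture agreement across more than two raters.

# Illustrative sketch (not the authors' analysis code): Cohen's kappa for one
# hypothetical faculty-student pair rating the same ten MCQs as
# higher-order (1) or lower-order (0). All ratings below are made up.
import numpy as np

def cohens_kappa(rater_a, rater_b):
    """Chance-corrected agreement between two raters on nominal labels."""
    a, b = np.asarray(rater_a), np.asarray(rater_b)
    labels = np.union1d(a, b)
    p_observed = np.mean(a == b)                                   # raw agreement
    p_chance = sum(np.mean(a == c) * np.mean(b == c) for c in labels)  # expected by chance
    return (p_observed - p_chance) / (1 - p_chance)

faculty = [1, 1, 0, 1, 0, 0, 1, 1, 0, 1]   # hypothetical Bloom's assignments
student = [1, 0, 0, 1, 1, 0, 1, 0, 0, 1]
print(round(cohens_kappa(faculty, student), 3))   # 0.4 for these made-up ratings

Krippendorff's alpha generalizes the same chance-correction idea to any number of raters and to missing ratings, which is why it suits the whole-group comparisons reported above.
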
3.
Med Educ Online; 24(1): 1630239, 2019 Dec.
Article in English | MEDLINE | ID: mdl-31248355

ABSTRACT

Background: Teaching students how to create assessments, such as those involving multiple-choice questions (MCQs), has the potential to be a useful active learning strategy. In order to optimize students' learning, it is essential to understand how they engage with such activities.

Objective: To explore medical students' perceptions of how completing rigorous MCQ training and subsequently writing MCQs affects their learning.

Design: In this mixed methods exploratory qualitative study, eighteen second-year medical students, trained in MCQ-writing best practices, collaboratively generated a question bank. Subsequently, the authors conducted focus groups with eight students to probe impressions of the process and its effect on learning. Responses partially informed a survey consisting of open-ended and Likert rating scale questions that the remaining ten students completed. Focus group and survey data from the eighteen participants were iteratively coded and categorized into themes related to perceptions of training and of collaborative MCQ writing.

Results: Medical students felt that training in MCQ construction affected their appreciation for MCQ examinations and their test-taking strategy. They perceived that writing MCQs required more problem-solving and content integration than their preferred study strategies. In particular, generating plausible distractors required the most critical reasoning, as it demanded subtle distinctions between diagnoses and treatments. Additionally, collaborating with other students was beneficial in providing exposure to different learning and question-writing approaches.

Conclusions: Completing MCQ-writing training increases appreciation for MCQ assessments. Writing MCQs requires medical students to make conceptual connections, distinguish between diagnostic and therapeutic options, and learn from colleagues, but it demands substantial time and a strong knowledge base.


Subject(s)
Educational Measurement/methods; Problem-Based Learning/organization & administration; Students, Medical/psychology; Adult; Female; Humans; Male; Qualitative Research; Writing; Young Adult
4.
Acad Med; 94(1): 71-75, 2019 Jan.
Article in English | MEDLINE | ID: mdl-30188369

ABSTRACT

PROBLEM: Multiple-choice question (MCQ) examinations are a primary mode of assessment used by medical schools. It can be challenging for faculty to produce content-aligned, comprehensive, and psychometrically sound MCQs, and despite best efforts, examinations sometimes produce unexpected results. Assessment best practices lack a systematic way to address the gap when actual and expected outcomes do not align.

APPROACH: The authors propose using root cause analysis (RCA) to systematically review unexpected educational outcomes. Using a real-life example of a class's unexpectedly low reproduction examination scores (University of Michigan Medical School, 2015), the authors describe their RCA process, which included a system flow diagram, a fishbone diagram, and an application of the 5 Whys to understand the contributors to, and reasons for, the lower-than-expected performance. Using this RCA approach, the authors identified multiple contributing factors that potentially led to the low examination scores, including a lack of examination quality improvement (QI) for poorly constructed items, misalignment between content and questions and between pedagogy and assessment, and other issues related to environment and people.

OUTCOMES: As a result of the RCA, the authors worked with stakeholders to address these issues and to develop strategies to prevent similar systematic issues from recurring. For example, a more robust examination QI process was developed.

NEXT STEPS: RCA is well grounded in health care practice and can be readily adapted for assessment. Because this is a novel use of RCA, there are opportunities to expand beyond the authors' initial approach to using RCA in assessment.


Subject(s)
Education, Medical/methods; Education, Medical/standards; Educational Measurement/methods; Educational Measurement/standards; Adult; Female; Humans; Male; Young Adult
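
To make the "examination QI for poorly constructed items" step mentioned in this abstract concrete, the sketch below shows a minimal item-analysis pass of the kind such a review might run: each item is scored for difficulty (proportion correct) and discrimination (correlation of the item with the rest of the test score). The response matrix and flag thresholds are invented for illustration; the article describes a process, not code.

# Illustrative sketch (not from the article): a minimal item-analysis pass that an
# examination quality-improvement (QI) review might use to flag poorly constructed MCQs.
# 'responses' is a made-up 0/1 matrix (rows = students, columns = items).
import numpy as np

responses = np.array([
    [1, 1, 0, 1],
    [1, 0, 0, 1],
    [0, 1, 0, 1],
    [1, 1, 1, 1],
    [0, 0, 0, 0],
])

totals = responses.sum(axis=1)
for item in range(responses.shape[1]):
    correct = responses[:, item]
    difficulty = correct.mean()                               # proportion answering correctly
    rest_score = totals - correct                             # total score excluding this item
    discrimination = np.corrcoef(correct, rest_score)[0, 1]   # point-biserial vs. rest of test
    flag = " <- review" if difficulty < 0.3 or discrimination < 0.2 else ""
    print(f"item {item + 1}: difficulty={difficulty:.2f}, discrimination={discrimination:.2f}{flag}")

Items that almost no one answers correctly, or that correlate poorly with the rest of the examination, are the ones an RCA-informed QI process would route back to the question writers.
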
5.
Acad Med; 93(6): 856-859, 2018 Jun.
Article in English | MEDLINE | ID: mdl-29215375

ABSTRACT

Medical school assessments should foster the development of higher-order thinking skills to support clinical reasoning and a solid foundation of knowledge. Multiple-choice questions (MCQs) are commonly used to assess student learning, and well-written MCQs can support learner engagement in higher levels of cognitive reasoning, such as application or synthesis of knowledge. Bloom's taxonomy has been used to identify MCQs that assess students' critical thinking skills, with evidence suggesting that higher-order MCQs support a deeper conceptual understanding of scientific process skills. Clinical practice similarly requires learners to develop higher-order thinking skills that span all of Bloom's levels. Faculty question writers and examinees may approach the same material differently based on their varying levels of knowledge and expertise, and these differences can influence the cognitive level a given MCQ actually measures. Consequently, faculty question writers may perceive that certain MCQs require higher-order thinking skills to process, whereas examinees may need only lower-order thinking skills to render a correct response. Likewise, seemingly lower-order questions may actually require higher-order thinking skills to answer correctly. In this Perspective, the authors describe some of the cognitive processes examinees use to respond to MCQs. The authors propose that various factors affect both the question writer's and the examinee's interaction with test material and the cognitive processes needed to answer a question.


Subject(s)
Educational Measurement/methods; Students, Medical/psychology; Thinking; Choice Behavior; Cognition; Humans; Problem Solving